Can Photonics Power Next-Gen AI Chatbots?
2023-06-21

What have you asked ChatGPT to do lately? Chances are, if you haven’t tried the AI chatbot yourself, you know someone who has. From writing essays and code to explaining complex concepts, ChatGPT is blowing minds around the world with its speed and human-sounding prose. It’s also another example of how AI is becoming more accessible and pervasive in our smart-everything world.


As compute-intensive applications like AI and machine learning (ML) become ever more ingrained in our lives, it's worth considering the underlying infrastructure that makes these innovations possible. Simply put, these applications place a heavy load on the hardware that processes the algorithms, runs the models, and keeps data flowing.


Hyperscale data centers with high-performance compute resources have emerged to tackle the workloads of AI, high-performance computing, and big data analytics. However, it is becoming increasingly clear that the traditional copper interconnects that bring together different components inside these data centers are hitting a bandwidth limit. This is where photonic integrated circuits (PICs), which use the power of light, can play a pivotal role.


Photonics can provide an avenue not only to a higher level of performance but also to greater energy efficiency. Photonics can also support miniaturization, helping to shrink both footprint and power consumption.


The power price of executing AI models 


The power of ChatGPT lies in its use of the Generative Pre-trained Transformer 3 (GPT-3) autoregressive language model from OpenAI, which uses deep learning to produce text. At 175 billion parameters, the model has more parameters than the human brain has neurons (roughly 100 billion).


AI models like these place huge demands on the hardware components that process them, such as memory, GPUs, CPUs, and accelerators. A hardware foundation of large GPU arrays and high-bandwidth optical connectivity is needed to execute AI models, and all of this comes with serious power (and cost) considerations.
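To make the scale concrete, here is a back-of-envelope sizing sketch in Python. The parameter count comes from the article; the 16-bit precision and 80 GB accelerator capacity are illustrative assumptions, not figures from the source.

```python
# Back-of-envelope sizing for serving a 175B-parameter model.
# Precision and GPU capacity below are illustrative assumptions.

PARAMS = 175e9          # GPT-3-scale parameter count (from the article)
BYTES_PER_PARAM = 2     # assuming 16-bit (FP16/BF16) weights
GPU_MEMORY_GB = 80      # assuming an 80 GB accelerator

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
gpus_needed = -(-weights_gb // GPU_MEMORY_GB)   # ceiling division

print(f"Weights alone: {weights_gb:.0f} GB")
print(f"GPUs just to hold the weights: {gpus_needed:.0f}")
# -> 350 GB of weights and at least 5 such GPUs, before counting
#    activations or redundancy -- hence large GPU arrays and the
#    high-bandwidth links between them.
```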


Hyperscale data centers, which typically feature at least 5,000 servers managing petabytes or more of data across hundreds of thousands of square feet, provide efficiency in their ability to quickly process voluminous amounts of data. However, this capacity and capability come with a huge power penalty: data center energy use was in the 220-to-320 terawatt-hour (TWh) range in 2021, representing roughly 0.9% to 1.3% of global final electricity demand, according to an International Energy Agency report. That is more energy than some countries consume in a year.


In many data centers, hardware components are typically connected via copper interconnects, while the connections between racks often use optical fiber. Optical connections are now being adopted over ever-shorter distances, to the point where optical I/O for core silicon, such as switches, CPUs, GPUs, and die-to-die interconnects, is quickly emerging as an inevitable solution for next-generation data centers. By using the properties of light, photonic ICs can extend the reach and increase the rate of data transmission.


From a physics standpoint, nothing else can do what photonics can do to increase bandwidth and speed while also reducing latency and power consumption. This is just what data centers—and the AI chatbots that rely on them—need.
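As a rough illustration of the power argument, interconnect power scales as energy per bit times aggregate bandwidth. The sketch below uses ballpark pJ/bit figures often quoted for electrical SerDes and co-packaged optics; these numbers and the 51.2 Tb/s switch bandwidth are assumptions for illustration, not data from the article.

```python
# Illustrative link-power comparison: power = (energy per bit) x (bandwidth).
# The pJ/bit values are rough ballparks, not figures from the article.

def link_power_watts(bandwidth_tbps: float, pj_per_bit: float) -> float:
    """bits/second * joules/bit = watts."""
    return bandwidth_tbps * 1e12 * pj_per_bit * 1e-12

SWITCH_IO_TBPS = 51.2   # assumption: aggregate I/O of one switch ASIC

for label, pj in [("electrical SerDes, ~5 pJ/bit", 5.0),
                  ("co-packaged optics, ~1 pJ/bit", 1.0)]:
    print(f"{label}: {link_power_watts(SWITCH_IO_TBPS, pj):.0f} W")
# ~256 W vs ~51 W of I/O power for a single chip -- a gap that
# multiplies across every link in a hyperscale data center.
```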


“As we drive discovery of the most optimal AI and quantum systems for an array of industries, including healthcare, finance, and industrial, we experience the real benefits of using photonics for a substantial uplift in bandwidth and speed,” QpiAI CEO Nagendra Nagaraja recently said. “Synopsys photonics solutions enable us to infuse our technologies with the speed of light to help our customers enhance their business outcomes.”


Optical highway for disaggregated data centers  


We are already seeing a shift in data center architectures toward disaggregation, where homogeneous resources like storage, compute, and networking are separated into different boxes and connected optically. This type of architecture minimizes wasted resources: a central intelligence unit determines and takes what's needed from each of the boxes, with the data traversing optical interconnects, and the remaining resources stay available for other workloads.
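As a minimal sketch of the idea (with hypothetical resource names and capacities; real orchestrators are far more sophisticated), a disaggregated allocator carves each workload's slice out of shared pools and leaves the remainder for others:

```python
# Toy sketch of disaggregated resource pooling; names and capacities
# are hypothetical.

POOLS = {"compute_cores": 4096, "storage_tb": 1000, "nic_gbps": 8000}

def allocate(request: dict) -> dict:
    """Reserve a workload's slice from the shared pools; whatever is
    left stays available to other workloads over the optical fabric."""
    for resource, amount in request.items():
        if POOLS.get(resource, 0) < amount:
            raise RuntimeError(f"pool exhausted: {resource}")
    for resource, amount in request.items():
        POOLS[resource] -= amount
    return dict(request)

lease = allocate({"compute_cores": 512, "storage_tb": 40, "nic_gbps": 400})
print("leased:", lease)
print("remaining:", POOLS)
```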


Aside from their use in rack-to-rack, room-to-room, and building-to-building configurations, optical interconnects could also become predominant at the CPU and GPU levels and take care of the data I/O using optical signals.

The desire to replace many parallel and high-speed serial electrical I/O lanes with optical high-bandwidth connections is driving the need for near-packaged optics, which use a high-performance PCB substrate (an interposer) on the host board, and for co-packaged optics, a single-package integration of electrical and photonic dies.


While photonics likely won't replace traditional electronic semiconductor components in the short term, it clearly has a place at the table in addressing bandwidth and latency requirements. Meanwhile, research is underway to understand the value of analog and digital computing in the optical domain.


Addressing bandwidth barriers


Many expect the PIC market to grow substantially in the next decade. With their high-speed data transfer and low energy consumption, PICs offer a way to break through bandwidth barriers while minimizing energy impact.


Companies like Lightmatter, with its photonic AI computing platform, and Ayar Labs, which develops optical interconnect chiplets, are among those at the forefront of developing new technologies to address bandwidth demands while reducing environmental impact. In addition, several companies are devising analog and digital compute solutions that use photons instead of electrons as the arithmetic core.


However, PIC design is not as straightforward as traditional IC design. The performance of these circuits is tied to the material, as well as to optical properties that are, in turn, determined by geometry. Success in this realm calls for knowledge of the latest research, tools, and techniques, along with a deep understanding of quantum and physical optics.
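A small example of that geometry dependence: the free spectral range (FSR) of a ring resonator, a basic PIC building block, is set jointly by the ring's circumference and the material's group index via FSR = λ²/(n_g·L). The values below are illustrative, assuming a typical silicon wire waveguide:

```python
# Free spectral range of a ring resonator: FSR = lambda^2 / (n_g * L).
# Illustrates how a PIC's behavior is tied to geometry and material.
# All values are illustrative.

import math

wavelength_m = 1550e-9   # telecom C-band wavelength
group_index = 4.2        # assumption: typical silicon wire waveguide
radius_m = 10e-6         # assumption: 10 um ring radius

circumference_m = 2 * math.pi * radius_m
fsr_m = wavelength_m**2 / (group_index * circumference_m)

print(f"FSR ~= {fsr_m * 1e9:.1f} nm")   # about 9.1 nm
# Halve the radius and the FSR doubles: unlike a transistor, the
# device's optical response is inseparable from its physical shape.
```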


Synopsys, which provides a seamless design flow for photonic devices, systems, and ICs, aims to help lead companies down the path toward photonic design success. We also collaborate closely with major foundries on process design kits that streamline PIC development, and we work with governmental organizations on photonics education initiatives. Synopsys also invests in the technology itself, as exemplified by our collaboration with Juniper Networks in creating OpenLight, an open silicon photonics platform with integrated lasers.


From ChatGPT to the IoT and beyond, bandwidth-hungry applications are driving the need for higher-speed data transfer. Photonics-based chips answer the call, harnessing the speed of light to deliver greater bandwidth, lower latency, and lower power.

